The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
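The two most commonly reported strategies, patch-based training and k-fold cross-validation, can be sketched in a few lines. This is a minimal numpy illustration of our own, not taken from any surveyed solution; the function names and tile sizes are assumptions:

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a window over a 2-D image and collect patches for training."""
    H, W = image.shape
    ph, pw = patch
    tiles = []
    for y in range(0, H - ph + 1, stride):
        for x in range(0, W - pw + 1, stride):
            tiles.append(image[y:y + ph, x:x + pw])
    return np.stack(tiles)

def kfold_indices(n, k, seed=0):
    """Yield (train, val) index arrays for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

# A 64x64 "image" too large to process at once: tile it into 16x16 patches.
image = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches = extract_patches(image, patch=(16, 16), stride=16)   # 4x4 = 16 tiles
splits = list(kfold_indices(len(patches), k=5))
```

Each of the five splits trains on roughly 80% of the patches and validates on the remainder; ensembling the fold models is then a matter of averaging their predictions.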
Neural Radiance Fields (NeRF) have demonstrated superior novel view synthesis performance but are slow at rendering. To speed up the volume rendering process, many acceleration methods have been proposed at the cost of large memory consumption. To push the frontier of the efficiency-memory trade-off, we explore a new perspective to accelerate NeRF rendering, leveraging a key fact that the viewpoint change is usually smooth and continuous in interactive viewpoint control. This allows us to leverage the information of preceding viewpoints to reduce the number of rendered pixels as well as the number of sampled points along the ray of the remaining pixels. In our pipeline, a low-resolution feature map is rendered first by volume rendering, then a lightweight 2D neural renderer is applied to generate the output image at target resolution leveraging the features of preceding and current frames. We show that the proposed method can achieve competitive rendering quality while reducing the rendering time with little memory overhead, enabling 30 FPS at 1080p resolution with a low memory footprint.
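The core idea of reusing preceding viewpoints to cut the number of rendered pixels can be sketched as follows. This is a toy numpy illustration of our own, not the paper's pipeline (which additionally renders a low-resolution feature map and applies a learned 2D renderer); `render_fn` stands in for the expensive per-pixel volume rendering:

```python
import numpy as np

def reuse_previous_frame(prev_frame, render_fn, step=2):
    """Render only a sparse grid of pixels for the current frame and
    fill the remaining pixels from the preceding frame."""
    H, W = prev_frame.shape[:2]
    frame = prev_frame.copy()           # start from the previous viewpoint
    mask = np.zeros((H, W), dtype=bool)
    mask[::step, ::step] = True         # render only 1/step**2 of the pixels
    ys, xs = np.nonzero(mask)
    frame[ys, xs] = render_fn(ys, xs)   # expensive volume rendering goes here
    return frame, mask.sum() / (H * W)

prev = np.zeros((8, 8, 3), dtype=np.float32)
frame, rendered_frac = reuse_previous_frame(
    prev, lambda ys, xs: np.ones((len(ys), 3)), step=2)
```

With `step=2`, only a quarter of the pixels pass through volume rendering; the rest inherit values from the previous frame, which a learned 2D renderer would then refine.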
Label shift has been widely believed to be harmful to the generalization performance of machine learning models. Researchers have proposed many approaches to mitigate its impact, e.g., balancing the training data. However, these methods often consider the underparametrized regime, where the sample size is much larger than the data dimension; research in the overparametrized regime is very limited. To bridge this gap, we propose a new asymptotic analysis of the Fisher Linear Discriminant classifier for binary classification with label shift. Specifically, we prove that a phase transition phenomenon exists: in a certain overparametrized regime, the classifier trained using imbalanced data outperforms its counterpart trained on reduced, balanced data. Moreover, we investigate the impact of regularization on the label shift: the aforementioned phase transition vanishes as the regularization becomes strong.
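For reference, a regularized Fisher Linear Discriminant can be fit in a few lines of numpy. This is our own minimal sketch on low-dimensional synthetic data; it does not reproduce the paper's overparametrized regime or its asymptotic analysis:

```python
import numpy as np

def fisher_ld(X, y, reg=0.0):
    """Regularized Fisher Linear Discriminant:
    w = (S + reg*I)^{-1} (mu1 - mu0), threshold at the class-mean midpoint."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    S = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])  # pooled covariance
    w = np.linalg.solve(S, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

rng = np.random.default_rng(0)
d, n0, n1 = 5, 200, 50                    # deliberately imbalanced classes
X = np.vstack([rng.normal(-1, 1, (n0, d)), rng.normal(+1, 1, (n1, d))])
y = np.r_[np.zeros(n0), np.ones(n1)]
w, b = fisher_ld(X, y, reg=0.1)
acc = np.mean((X @ w + b > 0) == y)
```

The paper's comparison is between this classifier trained on all imbalanced data versus on a balanced subsample of it; the sketch above shows only the estimator itself.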
The rapid development of aspect-based sentiment analysis (ABSA) within recent decades shows great potential for real-world applications. Current ABSA works, however, are mostly limited to the scenario of a single text piece, leaving the study of dialogue contexts unexplored. In this work, we introduce a novel task of conversational aspect-based sentiment quadruple analysis, namely DiaASQ, aiming to detect the quadruple of target-aspect-opinion-sentiment in a dialogue. DiaASQ bridges the gap between fine-grained sentiment analysis and conversational opinion mining. We manually construct a large-scale, high-quality Chinese dataset and also obtain an English version of the dataset via manual translation. We propose a neural model to benchmark the task. It effectively performs end-to-end quadruple prediction and incorporates rich dialogue-specific and discourse feature representations for better cross-utterance quadruple extraction. We finally point out several potential directions to facilitate follow-up research on this new task. The DiaASQ data is open at https://github.com/unikcc/DiaASQ
Temporal sentence grounding aims to localize the target video moment that corresponds to a given sentence query in an untrimmed video. However, recent works have found that existing methods suffer from a severe temporal bias problem: rather than grounding the target moment based on visual-textual semantic alignment, they over-rely on the temporal biases of queries in the training set. To this end, this paper proposes a novel training framework for grounding models that uses shuffled videos to address the temporal bias problem without losing grounding accuracy. Our framework introduces two auxiliary tasks, cross-modal matching and temporal order discrimination, to promote grounding model training. The cross-modal matching task leverages the content consistency between shuffled and original videos to force the grounding model to mine visual content in order to match the query semantics. The temporal order discrimination task leverages the difference in temporal order to strengthen the understanding of long-term temporal context. Extensive experiments on Charades-STA and ActivityNet Captions demonstrate the effectiveness of our method in mitigating the reliance on temporal bias and strengthening the model's generalization ability to different temporal distributions. Code is available at https://github.com/haojc/ShufflingVideosForTSG.
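The shuffling operation underlying both auxiliary tasks can be sketched as follows. This is our own minimal numpy illustration; the number of segments and the permutation scheme are assumptions, not the authors' exact recipe:

```python
import numpy as np

def shuffle_clips(video, n_clips, rng):
    """Split a video (T, ...) into n_clips segments and shuffle their order.
    The frame content is preserved (useful for cross-modal matching), while
    the binary label (1 = order changed) supervises order discrimination."""
    segs = np.array_split(video, n_clips)
    order = rng.permutation(n_clips)
    shuffled = np.concatenate([segs[i] for i in order])
    label = int(not np.array_equal(order, np.arange(n_clips)))
    return shuffled, label

rng = np.random.default_rng(1)
video = np.arange(12).reshape(12, 1)        # 12 frames, 1 feature each
shuffled, label = shuffle_clips(video, n_clips=4, rng=rng)
```

Because shuffling permutes segments but keeps their content intact, the shuffled video remains matchable to the original query semantics while its temporal order signal is destroyed.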
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity. Tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Overparametrized neural networks possess great representation power on complex data and, more importantly, yield sufficiently smooth outputs, which is crucial to their generalization and robustness. Most existing function approximation theories suggest that, with sufficiently many parameters, neural networks can well approximate certain classes of functions in terms of function value. The neural networks themselves, however, can be highly nonsmooth. To bridge this gap, we take convolutional residual networks (ConvResNets) as an example and prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness. Moreover, we extend our theory to approximating functions supported on a low-dimensional manifold. Our theory partially justifies the benefits of using deep networks in practice. Numerical experiments on adversarially robust image classification are provided to support our theory.
Learning operators between infinite-dimensional spaces is an important learning task arising in a wide range of applications in machine learning, imaging science, mathematical modeling, and simulation. This paper studies the nonparametric estimation of Lipschitz operators using deep neural networks. Non-asymptotic upper bounds are derived for the generalization error of the empirical risk minimizer over a properly chosen network class. Under the assumption that the target operator exhibits a low-dimensional structure, our error bounds decay as the training sample size increases, with an attractive fast rate depending on the intrinsic dimension in our estimation. Our assumptions cover most scenarios in real applications, and our results give rise to fast rates by exploiting low-dimensional structures in operator estimation. We also investigate the influence of network structure (e.g., network width, depth, and sparsity) on the generalization error of the neural network estimator, and propose general suggestions on the choice of network structure to quantitatively maximize learning efficiency.
In this paper, we study the problem of predicting the future volatility of foreign exchange currency pairs using deep learning techniques. We show step by step how to build a deep learning network guided by the empirical patterns of intraday volatility. Numerical results show that a multiscale long short-term memory (LSTM) model with inputs from multiple currency pairs consistently achieves state-of-the-art accuracy compared with conventional baselines, i.e., autoregressive and GARCH models, as well as other deep learning models.
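As a point of reference for the autoregressive baseline, realized volatility and a one-step AR(1) forecast can be computed in a few lines. This is our own sketch on synthetic intraday prices; it does not reproduce the paper's LSTM model or its multiscale inputs:

```python
import numpy as np

def realized_vol(prices):
    """Daily realized volatility: sqrt of summed squared intraday log returns."""
    r = np.diff(np.log(prices))
    return np.sqrt(np.sum(r ** 2))

def ar1_forecast(vols):
    """One-step AR(1) forecast of tomorrow's volatility via least squares."""
    x, y = np.asarray(vols[:-1]), np.asarray(vols[1:])
    A = np.vstack([x, np.ones_like(x)]).T
    beta, alpha = np.linalg.lstsq(A, y, rcond=None)[0]
    return beta * vols[-1] + alpha

# 30 synthetic trading days, 390 one-minute prices each.
rng = np.random.default_rng(0)
days = [100 * np.exp(np.cumsum(rng.normal(0, 0.001, 390))) for _ in range(30)]
vols = [realized_vol(p) for p in days]
forecast = ar1_forecast(vols)
```

The LSTM and GARCH models compared in the paper target the same quantity, the next period's realized volatility, but condition on richer histories than this single-lag regression.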
Synthetic data is an emerging technology that can significantly accelerate the development and deployment of AI machine learning pipelines. In this work, we develop a high-fidelity time-series generator, SigWGAN, by combining continuous-time stochastic models with a newly proposed signature $W_1$ metric. The former is the Logsig-RNN model based on stochastic differential equations, whereas the latter originates from the universal and principled mathematical feature of the signature for characterizing the measure induced by time series. SigWGAN turns the computationally challenging GAN min-max problem into supervised learning while generating high-fidelity samples. We validate the proposed model on synthetic data generated by popular quantitative risk models and on empirical financial data. Code is available at https://github.com/SigCGANs/Sig-Wassersein-GANs.git.
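The signature features underlying such a $W_1$ metric can be computed in closed form for piecewise-linear paths. Below is a minimal numpy sketch of the depth-2 truncated signature, our own illustration rather than the repository's implementation (which uses higher truncation depths and log-signatures):

```python
import numpy as np

def signature_level2(path):
    """Depth-2 signature of a piecewise-linear path of shape (T, d).
    Level 1 is the total increment; level 2 collects the iterated integrals
    sum_{k<l} dX_k (x) dX_l plus a 0.5*dX_k (x) dX_k correction per segment."""
    dX = np.diff(path, axis=0)                        # (T-1, d) increments
    S1 = dX.sum(axis=0)                               # level-1 term
    csum = np.vstack([np.zeros(path.shape[1]),        # csum[k] = sum_{m<k} dX_m
                      np.cumsum(dX, axis=0)[:-1]])
    S2 = csum.T @ dX + 0.5 * dX.T @ dX                # level-2 matrix
    return S1, S2

t = np.linspace(0.0, 1.0, 101)
path = np.stack([t, t ** 2], axis=1)                  # 2-D path (t, t^2)
S1, S2 = signature_level2(path)
```

Two sanity checks make the construction concrete: the symmetric part of the level-2 term satisfies the shuffle identity `S2[i,j] + S2[j,i] = S1[i]*S1[j]`, and `S2[0,1]` approximates the iterated integral $\int_0^1 t \, d(t^2) = 2/3$.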